Results 1 - 20 of 52
1.
CMC-Computers, Materials & Continua ; 75(3):5159-5176, 2023.
Article in English | Web of Science | ID: covidwho-20244984

ABSTRACT

The diagnosis of COVID-19 requires chest computed tomography (CT). High-resolution CT images can provide more diagnostic information to help doctors better diagnose the disease, so it is of clinical importance to study super-resolution (SR) algorithms that improve the resolution of CT images. However, most existing SR algorithms are designed for natural images, which makes them unsuitable for medical images; moreover, most of these algorithms improve reconstruction quality by increasing network depth, which is not suitable for machines with limited resources. To alleviate these issues, we propose a residual feature attentional fusion network for lightweight chest CT image super-resolution (RFAFN). Specifically, we design a contextual feature extraction block (CFEB) that can extract CT image features more efficiently and accurately than ordinary residual blocks. In addition, we propose a feature-weighted cascading strategy (FWCS) based on attentional feature fusion blocks (AFFB) to utilize the high-frequency detail information extracted by CFEB as much as possible by selectively fusing adjacent-level feature information. Finally, we suggest a global hierarchical feature fusion strategy (GHFFS), which can utilize hierarchical features more effectively than dense concatenation by progressively aggregating feature information at various levels. Numerous experiments show that our method performs better than most state-of-the-art (SOTA) methods on the COVID-19 chest CT dataset. In detail, the peak signal-to-noise ratio (PSNR) is 0.11 dB and 0.47 dB higher on CTtest1 and CTtest2 at ×3 SR compared to the suboptimal method, while the number of parameters and multi-adds are reduced by 22K and 0.43G, respectively. Our method can better recover chest CT image quality with fewer computational resources and effectively assist in COVID-19 diagnosis.
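The comparison above is reported in peak signal-to-noise ratio (PSNR), computed from the mean squared error between the ground-truth and reconstructed images. A minimal NumPy sketch for reference; the 8-bit peak value of 255.0 is an assumption, since CT data may use a wider dynamic range:

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

On this scale, the reported 0.11 dB and 0.47 dB gains correspond to small but consistent reductions in reconstruction error.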

2.
Healthcare (Basel) ; 11(10)2023 May 10.
Article in English | MEDLINE | ID: covidwho-20238731

ABSTRACT

Convolutional neural networks (CNNs) have shown promise in accurately diagnosing coronavirus disease 2019 (COVID-19) and bacterial pneumonia using chest X-ray images. However, determining the optimal feature extraction approach is challenging. This study investigates the use of fusion-extracted features by deep networks to improve the accuracy of COVID-19 and bacterial pneumonia classification with chest X-ray radiography. A Fusion CNN method was developed using five different deep learning models after transfer learning to extract image features. The combined features were used to build a support vector machine (SVM) classifier with an RBF kernel. The performance of the model was evaluated using accuracy, Kappa values, recall rate, and precision scores. The Fusion CNN model achieved an accuracy and Kappa value of 0.994 and 0.991, with precision scores for the normal, COVID-19, and bacterial groups of 0.991, 0.998, and 0.994, respectively. The results indicate that the Fusion CNN models with the SVM classifier provided reliable and accurate classification performance, with Kappa values no less than 0.990. Using a Fusion CNN approach could be a possible solution to further enhance accuracy. Therefore, the study demonstrates the potential of deep learning and fusion-extracted features for accurate COVID-19 and bacterial pneumonia classification with chest X-ray radiography.
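The fuse-then-classify pipeline described here can be reduced to two generic steps: concatenating per-model feature vectors, and scoring samples with an RBF kernel as the SVM does. This is an illustrative NumPy sketch, not the authors' implementation; `fuse_features`, the feature shapes, and `gamma` are assumptions:

```python
import numpy as np

def fuse_features(feature_sets):
    """Concatenate per-model feature vectors (the 'fusion' step) along the feature axis."""
    return np.concatenate(feature_sets, axis=1)

def rbf_kernel(X, Y, gamma=0.1):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2), as used by an SVM classifier."""
    sq = np.sum(X ** 2, axis=1)[:, None] + np.sum(Y ** 2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)
```

In practice the fused matrix would be fed to an off-the-shelf SVM implementation rather than a hand-rolled kernel.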

3.
Biomedical Signal Processing and Control ; 85:105079, 2023.
Article in English | ScienceDirect | ID: covidwho-20230656

ABSTRACT

Combining transformers and convolutional neural networks is considered one of the most important directions for tackling medical image segmentation problems. To learn long-range dependencies and local contexts, previous approaches embedded a convolutional layer into the feedforward network inside the transformer block. However, a common issue is instability during training, since pre-layer normalization causes large differences in amplitude across layers. Furthermore, multi-scale features were directly fused using the transformer from the encoder to decoder, which could disrupt valuable information for segmentation. To address these concerns, we propose Advanced TransFormer (ATFormer), a novel hybrid architecture that combines convolutional neural networks and transformers for medical image segmentation. First, the traditional transformer block has been refined into an Advanced Transformer Block, which adopts post-layer normalization to obtain mild activation values and employs scaled cosine attention with a shifted window for accurate spatial information. Second, the Progressive Guided Fusion module is introduced to make multi-scale features more discriminative while reducing the computational complexity. Experimental results on the ACDC, COVID-19 CT-Seg, and Tumor datasets demonstrate the significant advantage of ATFormer over existing methods that rely solely on convolutional neural networks, transformers, or their combination.
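The scaled cosine attention mentioned above (introduced in Swin Transformer v2) replaces dot-product similarity with cosine similarity divided by a temperature, which bounds the attention logits. A single-head NumPy sketch, assuming a fixed temperature `tau` where the actual model learns it:

```python
import numpy as np

def cosine_attention(Q, K, V, tau=0.1):
    """Attention whose similarity is cosine(q, k) / tau instead of a scaled dot product."""
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    logits = (Qn @ Kn.T) / tau                     # cosine similarities, temperature-scaled
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability before softmax
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Because cosine similarity is bounded in [-1, 1], the logits cannot blow up with depth, which is the motivation for pairing it with post-layer normalization.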

4.
2nd IEEE International Conference on Electrical Engineering, Big Data and Algorithms, EEBDA 2023 ; : 1353-1358, 2023.
Article in English | Scopus | ID: covidwho-2320898

ABSTRACT

Wearing a mask during the COVID-19 epidemic can effectively prevent the spread of the virus. In view of the problems of small target size, mutual occlusion in crowds, and dense arrangement of targets in crowded places, a target detection algorithm based on an improved YOLOv5m model is proposed to achieve efficient detection of whether a mask is worn. This paper introduces four attention mechanisms into the feature extraction network of the YOLOv5m model to suppress irrelevant information, enhance the information representation of the feature map, and improve the detection capability of the model for small-scale targets. The experimental results showed that introducing the SE module increased the mAP value of the original network by 9.3 percentage points, the most significant increase among the four attention mechanisms. A dual-scale feature fusion network is then used in the Neck layer, giving different weights to the feature layers to convey more effective feature information. In image pre-processing, the Mosaic method was used for data enhancement, and the CIoU loss function was used for bounding-box localization in the prediction layer. Experiments on the improved YOLOv5m algorithm demonstrate that the mean recognition accuracy of the method improves by 10.7 percentage points over the original method while maintaining the original model size and detection speed, and it better solves the problems of small scale, dense arrangement, and mutual occlusion of targets in mask-wearing detection tasks in crowded places. © 2023 IEEE.
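The SE (squeeze-and-excitation) module that produced the largest mAP gain rescales each channel with a gate computed from globally pooled statistics. A NumPy sketch for one (C, H, W) feature map, with `w1` and `w2` standing in for the learned bottleneck weights:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map.
    w1: (C//r, C) reduction weights, w2: (C, C//r) expansion weights (learned in practice)."""
    squeeze = x.mean(axis=(1, 2))                   # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid channel gates in (0, 1)
    return x * scale[:, None, None]                 # reweight each channel
```

The gate suppresses uninformative channels, which is how the module "suppresses irrelevant information" in the abstract's terms.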

5.
IEEE Internet of Things Journal ; 10(4):2802-2810, 2023.
Article in English | Web of Science | ID: covidwho-2308234

ABSTRACT

This article introduces a new deep learning framework for fault diagnosis in electrical power systems. The framework integrates a convolutional neural network and different regression models to visually identify which faults have occurred in electric power systems. The approach includes three main steps: 1) data preparation; 2) object detection; and 3) hyperparameter optimization. Inspired by deep learning and evolutionary computation (EC) techniques, different strategies have been proposed in each step of the process. In addition, we propose a new hyperparameter optimization model based on EC that can be used to tune the parameters of our deep learning framework. To validate the framework's usefulness, an experimental evaluation is executed using the well-known and challenging VOC 2012 and COCO datasets and the large NESTA 162-bus system. The results show that our proposed approach significantly outperforms most of the existing solutions in terms of runtime and accuracy.
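An EC-based hyperparameter search of the kind described can be illustrated with a minimal (1+λ) evolution strategy: mutate the best parameter vector, keep the best-scoring offspring. The quadratic `loss` and every setting below are illustrative placeholders, not the paper's actual objective or algorithm:

```python
import random

def evolve(objective, init, iters=200, lam=5, sigma=0.3, decay=0.99, seed=0):
    """(1+lambda) evolution strategy: lower objective is better."""
    rng = random.Random(seed)
    best = list(init)
    best_score = objective(best)
    for _ in range(iters):
        for _ in range(lam):
            child = [x + rng.gauss(0.0, sigma) for x in best]
            score = objective(child)
            if score < best_score:
                best, best_score = child, score
        sigma *= decay  # anneal the mutation step size
    return best, best_score

# Illustrative stand-in: pretend validation loss is minimized at lr=0.1, momentum=0.9.
loss = lambda p: (p[0] - 0.1) ** 2 + (p[1] - 0.9) ** 2
params, score = evolve(loss, [1.0, 0.0])
```

In a real setup `objective` would train the detector briefly and return a validation metric, which is why EC is attractive: it needs no gradients of the training pipeline.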

6.
Electronics ; 12(8):1911, 2023.
Article in English | ProQuest Central | ID: covidwho-2303663

ABSTRACT

To address the current problems of incomplete classification in mask-wearing detection data, missed detection of small targets, and the insufficient feature extraction capabilities of lightweight networks dealing with complex faces, a lightweight method with an attention mechanism for detecting mask wearing is presented in this paper. This study incorporated an "incorrect_mask" category into the dataset to address incomplete classification. Additionally, the YOLOv4-tiny model was enhanced with a prediction feature layer and feature fusion execution, expanding the detection scale range and improving the performance on small targets. A CBAM attention module was then introduced into the feature enhancement network, which re-screened the feature information of the region of interest to retain important feature information and improve the feature extraction capabilities. Finally, a focal loss function and an improved mosaic data enhancement strategy were used to enhance the target classification performance. The experimental results of classifying three objects demonstrate that the lightweight model's detection speed was not compromised while achieving a 2.08% increase in the average classification precision, which was only 0.69% lower than that of the YOLOv4 network. Therefore, this approach effectively improves the mask-wearing detection performance of the lightweight network.
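The focal loss used here down-weights easy examples by a factor of (1 - p_t)^γ so that training concentrates on hard targets. A binary NumPy sketch with the commonly used, but here assumed, defaults γ = 2 and α = 0.25:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss; p = predicted probability of the positive class, y in {0, 1}."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)          # probability assigned to the true class
    a = np.where(y == 1, alpha, 1 - alpha)   # class-balance weight
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))
```

When p_t is already high, the (1 - p_t)^γ factor drives the term toward zero, so confident correct detections contribute almost nothing to the gradient.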

7.
Evolving Systems ; 2023.
Article in English | Scopus | ID: covidwho-2269831

ABSTRACT

The lungs of patients with COVID-19 exhibit distinctive lesion features in chest CT images. Fast and accurate segmentation of lesion sites from CT images of patients' lungs is significant for the diagnosis and monitoring of COVID-19 patients. To this end, we propose a progressive dense residual fusion network named PDRF-Net for COVID-19 lung CT segmentation. Dense skip connections are introduced to capture multi-level contextual information and compensate for the feature loss problem in network delivery. An efficient aggregated residual module is designed for the encoding-decoding structure, which combines a visual transformer and the residual block to enable the network to extract richer and more detailed features from CT images. Furthermore, we introduce a bilateral channel pixel weighted module to progressively fuse the feature maps obtained from multiple branches. The proposed PDRF-Net obtains good segmentation results on two COVID-19 datasets. Its segmentation performance exceeds the baseline by 11.6% and 11.1% on the two datasets, and it outperforms other mainstream methods. Thus, PDRF-Net serves as an easy-to-train, high-performance deep learning model that can realize effective segmentation of COVID-19 lung CT images. © 2023, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.

8.
Journal of Graphics ; 44(1):16-25, 2023.
Article in Chinese | Scopus | ID: covidwho-2268848

ABSTRACT

Wearing masks correctly during the COVID-19 pandemic can effectively prevent the spread of the virus. In response to the detection challenge posed by dense crowds and small detection targets in public places, a mask-wearing detection algorithm based on the YOLOv5s model and integrating an attention mechanism was proposed. Four attention mechanisms were introduced into the backbone network of the YOLOv5s model to respectively suppress irrelevant information, enhance the ability of the feature map to express information, and improve the model's detection ability for small-scale targets. Experimental results show that the introduction of the convolutional block attention module could increase the mAP value by 6.9 percentage points compared with the original network, the greatest improvement among the four attention mechanisms. The normalization-based attention module also showed excellent performance, with the fewest parameters while losing only a small amount of mAP. Through comparative experiments, the GIoU loss function was selected to calculate the bounding box regression loss, further improving positioning accuracy and yielding an mAP value 8.5 percentage points higher than that of the original network. The detection results of the improved model in different scenarios prove the accuracy and practicability of the algorithm for small target detection. © 2023, Editorial Board of Journal of Graphics. All rights reserved.

9.
Expert Systems: International Journal of Knowledge Engineering and Neural Networks ; 39(5):1-15, 2022.
Article in English | APA PsycInfo | ID: covidwho-2250718

ABSTRACT

The novel coronavirus (COVID-19) has an enormous impact on the daily lives and health of people residing in more than 200 nations. This article proposes a deep learning-based system for the rapid diagnosis of COVID-19. Chest X-ray radiograph images were used because recent findings revealed that these images contain salient features of COVID-19 disease. Transfer learning was performed using different pre-trained convolutional neural network models for binary (normal and COVID-19) and triple (normal, COVID-19 and viral pneumonia) class problems. Deep features were extracted from a fully connected layer of the ResNET50v2 model, and feature dimension was reduced through feature reduction methods. Feature fusion of the feature sets reduced through analysis of variance (ANOVA) and mutual information feature selection (MIFS) was fed to a Fine K-nearest neighbour classifier to perform binary classification. Similarly, serial feature fusion of MIFS and chi-square features was utilized to train Medium Gaussian Support Vector Machines to distinguish normal, COVID-19 and viral pneumonia cases. The proposed framework yielded accuracies of 99.5% for binary and 95.5% for triple class experiments. The proposed model shows better performance than existing methods, and this research has the potential to assist medical professionals in enhancing their diagnostic ability to detect coronavirus disease. (PsycInfo Database Record (c) 2022 APA, all rights reserved)
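The ANOVA feature reduction step ranks each feature by its one-way F-statistic (between-class variance over within-class variance) and keeps the top-scoring ones. A two-class NumPy sketch of that selection step; the cutoff `k` is a hypothetical parameter:

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F-statistic for each feature column of X, given class labels y."""
    classes = np.unique(y)
    n, k = len(y), len(classes)
    overall = X.mean(axis=0)
    ss_between = sum(np.sum(y == c) * (X[y == c].mean(axis=0) - overall) ** 2 for c in classes)
    ss_within = sum(np.sum((X[y == c] - X[y == c].mean(axis=0)) ** 2, axis=0) for c in classes)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def top_k_features(X, y, k):
    """Indices of the k most class-discriminative features (the reduction step)."""
    return np.argsort(anova_f_scores(X, y))[::-1][:k]
```

Features whose class means barely differ get an F-score near zero and are dropped before fusion.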

10.
22nd IEEE International Conference on Data Mining Workshops, ICDMW 2022 ; 2022-November:349-357, 2022.
Article in English | Scopus | ID: covidwho-2288986

ABSTRACT

COVID-19 has been rampant across the globe since it was discovered in 2020, but virus detection methods still lack efficiency and require human resources. Given the slow delivery of PCR test results and the many possible false negatives of rapid tests, medical imaging such as a chest computed tomography (CT) scan or chest X-ray (CXR) is an alternative and efficient way to detect the coronavirus accurately. For the past two years, many researchers have proposed different deep learning methods for COVID-19 detection using CT scans or CXR images. Given the lack of available data, our study proposes a new deep learning framework, VGG-FusionNet, that takes advantage of integrating features from both CT scan and CXR images while avoiding some pitfalls of previous studies, including a high risk of bias due to missing demographic information in the dataset, poor reproducibility, and no evaluation on different data sources to study generalizability. Specifically, we use the convolutional layers of GoogLeNet, ResNet, and VGG to extract features from CT scan and CXR images and fuse them before training through fully connected layers. The results show that using VGG's convolutional layers achieves the best overall performance with an accuracy of 0.93. Our proposed framework outperforms deep learning models that use features from CT scans or CXR alone. © 2022 IEEE.

11.
9th IEEE International Conference on Data Science and Advanced Analytics, DSAA 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2287763

ABSTRACT

With the rapid development of computing power and the severe challenges brought by COVID-19, e-learning, as the optimal solution for most students and other learner groups, plays an extremely important role in maintaining the normal operation of educational institutions. As the user community continues to expand, it has become increasingly important to guarantee the quality of teaching and learning. One way to ensure the quality of online education is to use e-learning behavior data to build learning performance predictors. However, most studies have ignored the intrinsic correlation between e-learning behaviors. Therefore, this study proposes an adaptive feature fusion-based e-learning performance prediction model (SA-FGDEM) relying on a theoretical model of learning behavior classification. The experimental results show that the feature space mined by a fine-grained differential evolution algorithm and adaptive feature fusion combined with the differential evolution algorithm can support e-learning performance prediction more effectively and outperform the benchmark method. © 2022 IEEE.

12.
6th International Joint Conference on Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM), APWeb-WAIM 2022 ; 13421 LNCS:106-120, 2023.
Article in English | Scopus | ID: covidwho-2287285

ABSTRACT

Inferring individual human mobility at a given time is not only beneficial for personalized location-based services, but also crucial for trajectory tracking of confirmed cases in the context of the COVID-19 pandemic. However, individual-generated trajectory data from mobile apps are characterized by implicit feedback, which means only a few individual-location interactions can be observed. Existing studies based on such sparse trajectory data are not sufficient to infer an individual's missing mobility in his/her historical trajectory and further predict the individual's future mobility at a given time. To address this concern, in this paper, we propose a temporal-context-aware approach that incorporates multiple factors to model time-sensitive individual-location interactions in a bottom-up way. Based on the idea of feature fusion, the driving effect of heterogeneous information such as time, space, category, and sentiment on an individual's mobile behavior is gradually strengthened, so that the temporal context when a check-in occurs can be accurately depicted. We leverage Bayesian Personalized Ranking (BPR) to optimize the model, where a novel negative sampling method is employed to alleviate data sparseness. Based on three real-world datasets, we evaluate the proposed approach with regard to two different tasks, namely, missing mobility inference and future mobility prediction at a given time. The empirical results encouragingly demonstrate that our approach outperforms multiple baselines in terms of two evaluation metrics, i.e., accuracy and average percentile rank. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
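Bayesian Personalized Ranking, used above to optimize the model, maximizes the log-sigmoid of the score gap between an observed location and a sampled negative one. A minimal matrix-factorization sketch of one SGD step; the factorization form, learning rate, and regularization are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def bpr_step(U, V, u, i, j, lr=0.05, reg=0.01):
    """One SGD step of BPR: push individual u's score for observed location i
    above the score for sampled negative location j."""
    u_f, i_f, j_f = U[u].copy(), V[i].copy(), V[j].copy()
    x_uij = u_f @ (i_f - j_f)            # current score difference
    g = 1.0 / (1.0 + np.exp(x_uij))      # gradient weight of -log sigmoid(x_uij)
    U[u] += lr * (g * (i_f - j_f) - reg * u_f)
    V[i] += lr * (g * u_f - reg * i_f)
    V[j] += lr * (-g * u_f - reg * j_f)
```

Repeating this over sampled (u, i, j) triples is what lets the model learn from implicit feedback, where only positive interactions are observed.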

13.
IEEE Access ; 11:16621-16630, 2023.
Article in English | Scopus | ID: covidwho-2281059

ABSTRACT

Medical image segmentation is a crucial way to assist doctors in the accurate diagnosis of diseases. However, the accuracy of medical image segmentation needs further improvement due to the prevalence of noisy medical images and the high similarity between background and target regions. Current mainstream image segmentation networks, such as TransUnet, have achieved accurate image segmentation. Still, the encoders of such segmentation networks do not consider the local connection between adjacent chunks and lack the interaction of inter-channel information during the upsampling of the decoder. To address the above problems, this paper proposes a dual-encoder image segmentation network comprising a HarDNet68 branch and a Transformer branch, which can extract the local features and global feature information of the input image, allowing the segmentation network to learn more image information and thus improving the effectiveness and accuracy of medical segmentation. To realize the fusion of image feature information of different dimensions in the encoding and decoding stages, we propose a feature adaptation fusion module that fuses the channel information of multi-level features, realizes the information interaction between channels, and improves segmentation network accuracy. The experimental results on the CVC-ClinicDB, ETIS-Larib, and COVID-19 CT datasets show that the proposed model performs better on four evaluation metrics (Dice, IoU, Prec, and Sens) and achieves better segmentation results in both internal filling and edge prediction of medical images. Accurate medical image segmentation can assist doctors in making a critical diagnosis of cancerous regions in advance, ensure cancer patients receive timely targeted treatment, and improve their quality of life. © 2013 IEEE.

14.
Neural Comput Appl ; 35(18): 13503-13527, 2023.
Article in English | MEDLINE | ID: covidwho-2263231

ABSTRACT

Covid text identification (CTI) is a crucial research concern in natural language processing (NLP). Social and electronic media are simultaneously adding a large volume of Covid-affiliated text on the World Wide Web due to the effortless access to the Internet, electronic gadgets and the Covid outbreak. Most of these texts are uninformative and contain misinformation, disinformation and malinformation that create an infodemic. Thus, Covid text identification is essential for controlling societal distrust and panic. Though very little Covid-related research (such as Covid disinformation, misinformation and fake news) has been reported in high-resource languages (e.g. English), CTI in low-resource languages (like Bengali) is in the preliminary stage to date. However, automatic CTI in Bengali text is challenging due to the deficit of benchmark corpora, complex linguistic constructs, immense verb inflexions and scarcity of NLP tools. On the other hand, the manual processing of Bengali Covid texts is arduous and costly due to their messy or unstructured forms. This research proposes a deep learning-based network (CovTiNet) to identify Covid text in Bengali. The CovTiNet incorporates an attention-based position embedding feature fusion for text-to-feature representation and attention-based CNN for Covid text identification. Experimental results show that the proposed CovTiNet achieved the highest accuracy of 96.61 ± 0.001% on the developed dataset (BCovC) compared to the other methods and baselines (i.e. BERT-M, IndicBERT, ELECTRA-Bengali, DistilBERT-M, BiLSTM, DCNN, CNN, LSTM, VDCNN and ACNN).
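The position-embedding side of an attention-based position embedding feature fusion can be illustrated with standard sinusoidal embeddings fused additively into token features. This is a generic sketch, not CovTiNet's actual mechanism; the fixed weight `alpha` stands in for a learned attention weight and is an assumption:

```python
import numpy as np

def sinusoidal_positions(seq_len, dim):
    """Standard sinusoidal position embeddings: sin on even dims, cos on odd dims."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(dim)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def fuse_position(token_feats, alpha=0.5):
    """Weighted additive fusion of token features with position embeddings."""
    seq_len, dim = token_feats.shape
    return alpha * token_feats + (1 - alpha) * sinusoidal_positions(seq_len, dim)
```

The fused representation gives the downstream CNN both what each token is and where it occurs in the sentence.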

15.
Biomed Signal Process Control ; 83: 104724, 2023 May.
Article in English | MEDLINE | ID: covidwho-2246224

ABSTRACT

COVID-19 has put all of humanity in a health dilemma as it spreads rapidly. For many infectious diseases, delayed detection results lead to the spread of infection and an increase in healthcare costs. COVID-19 diagnostic methods rely on a large amount of redundantly labeled data and time-consuming training processes to obtain satisfactory results. However, as a new epidemic, obtaining large clinical datasets is still challenging, which inhibits the training of deep models. Moreover, a model that can rapidly diagnose COVID-19 at all its stages has still not been proposed. To address these limitations, we combine feature attention and broad learning to propose a diagnostic system (FA-BLS) for COVID-19 pulmonary infection, which introduces a broad learning structure to address the slow diagnosis speed of existing deep learning methods. In our network, transfer learning is performed with ResNet50 convolutional modules with fixed weights to extract image features, and an attention mechanism is used to enhance feature representation. After that, feature nodes and enhancement nodes are generated by broad learning with random weights to adaptively select features for diagnosis. Finally, three publicly accessible datasets were used to evaluate our optimized model. The FA-BLS model trained 26-130 times faster than deep learning at a similar level of accuracy, enabling fast and accurate diagnosis and effective isolation of COVID-19 cases; the proposed method also opens up a new approach for other types of chest CT image recognition problems.
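The broad learning structure referenced above widens rather than deepens: random feature nodes and enhancement nodes are generated once, and only the output weights are solved in closed form by ridge regression, which is why training is fast. A NumPy sketch with hypothetical node counts, not the FA-BLS configuration:

```python
import numpy as np

def broad_learning_fit(X, Y, n_feature=20, n_enhance=40, lam=1e-2, seed=0):
    """Fit a broad learning system: random feature nodes plus tanh enhancement nodes,
    with output weights solved by ridge regression (no backpropagation)."""
    rng = np.random.default_rng(seed)
    Wf = rng.normal(size=(X.shape[1], n_feature))
    Z = X @ Wf                                  # feature nodes (linear map, random weights)
    We = rng.normal(size=(n_feature, n_enhance))
    H = np.tanh(Z @ We)                         # enhancement nodes (nonlinear, random weights)
    A = np.hstack([Z, H])
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def broad_learning_predict(model, X):
    Wf, We, W = model
    Z = X @ Wf
    return np.hstack([Z, np.tanh(Z @ We)]) @ W
```

Because the only "training" is one linear solve, fitting takes milliseconds where backpropagation would take minutes, which mirrors the 26-130x speedup the abstract reports.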

16.
29th IEEE International Conference on Image Processing, ICIP 2022 ; : 4098-4102, 2022.
Article in English | Scopus | ID: covidwho-2232489

ABSTRACT

Since computed tomography (CT) provides the most sensitive radiological technique for diagnosing COVID-19, CT has been used as an efficient and necessary diagnostic aid. However, publicly available COVID-19 imaging datasets are limited in size and number and have problems such as low data volume, easy overfitting during training, and significant differences in the characteristics of lesions at different scales. Our work presents an image segmentation network, Pyramid-and-GAN-UNet (PGUNet), to support the segmentation of COVID-19 lesions by combining a feature pyramid and a generative adversarial network (GAN). Using the GAN, the segmentation network can learn more abundant high-level features and increase its generalization ability. The feature pyramid module is used to resolve the differences between image features at different levels. Compared with current mainstream methods, our experimental results show that the proposed network achieved more competitive performance on the CT slice datasets of the COVID-19 CT Segmentation dataset and the CC-CCII dataset. © 2022 IEEE.

17.
29th IEEE International Conference on Image Processing, ICIP 2022 ; : 4098-4102, 2022.
Article in English | Scopus | ID: covidwho-2223121

ABSTRACT

Since computed tomography (CT) provides the most sensitive radiological technique for diagnosing COVID-19, CT has been used as an efficient and necessary diagnostic aid. However, publicly available COVID-19 imaging datasets are limited in size and number and have problems such as low data volume, easy overfitting during training, and significant differences in the characteristics of lesions at different scales. Our work presents an image segmentation network, Pyramid-and-GAN-UNet (PGUNet), to support the segmentation of COVID-19 lesions by combining a feature pyramid and a generative adversarial network (GAN). Using the GAN, the segmentation network can learn more abundant high-level features and increase its generalization ability. The feature pyramid module is used to resolve the differences between image features at different levels. Compared with current mainstream methods, our experimental results show that the proposed network achieved more competitive performance on the CT slice datasets of the COVID-19 CT Segmentation dataset and the CC-CCII dataset. © 2022 IEEE.

18.
Health Inf Sci Syst ; 11(1): 10, 2023 Dec.
Article in English | MEDLINE | ID: covidwho-2220291

ABSTRACT

Medical image segmentation is a challenging task due to the high variation in shape, size, and position of infections or lesions in medical images. It is necessary to construct multi-scale representations to capture image contents from different scales. However, it is still challenging for a U-Net with simple skip connections to model the global multi-scale context. To overcome this, we propose dense skip connections with cross co-attention in U-Net to bridge the semantic gaps for accurate automatic medical image segmentation. We name our method MCA-UNet, which enjoys two benefits: (1) it has a strong ability to model multi-scale features, and (2) it jointly explores spatial and channel attention. The experimental results on the COVID-19 and IDRiD datasets suggest that our MCA-UNet produces more precise segmentation for consolidation, ground-glass opacity (GGO), microaneurysms (MA) and hard exudates (EX). The source code of this work will be released via https://github.com/McGregorWwww/MCA-UNet/.

19.
IEEE Transactions on Computational Social Systems ; 2022.
Article in English | Web of Science | ID: covidwho-2213377

ABSTRACT

Inferring individual human mobility at a given time is not only beneficial for personalized location-based services but also crucial for tracking trajectories of confirmed cases in the COVID-19 pandemic. However, individual-generated trajectory data from mobile apps are characterized by implicit feedback, which means only a few individual-location interactions can be observed. Existing studies based on such sparse trajectory data are not sufficient to infer an individual's missing mobility in his/her historical trajectory and further predict an individual's future mobility at a given time under a unified framework. To address this concern, in this article, we propose a temporal-context-aware framework that incorporates multiple factors to model the time-sensitive individual-location interactions in a bottom-up way. Based on the idea of feature fusion, the driving effect of heterogeneous information on an individual's mobility is gradually strengthened, so that the temporal-spatial context when a check-in occurs can be accurately perceived. We leverage Bayesian personalized ranking (BPR) to optimize the model, where a novel negative sampling method is employed to alleviate data sparseness. Based on three real-world datasets, we evaluate the proposed approach with regard to two different tasks, namely, missing mobility inference and future mobility prediction at a given time. Experimental results encouragingly demonstrate that our approach outperforms multiple baselines in terms of two evaluation metrics. Furthermore, the predictability of individual mobility within different time windows is also revealed.

20.
Front Public Health ; 10: 982289, 2022.
Article in English | MEDLINE | ID: covidwho-2215416

ABSTRACT

The outbreak of coronavirus disease 2019 (COVID-19) has caused massive infections and large death tolls worldwide. Despite many studies on the clinical characteristics and treatment plans of COVID-19, they rarely conduct in-depth prognostic research on leveraging consecutive rounds of multimodal clinical examination and laboratory test data to facilitate clinical decision-making for the treatment of COVID-19. To address this issue, we propose a multistage multimodal deep learning (MMDL) model to (1) first assess the patient's current condition (i.e., mild or severe symptoms), then (2) give early warnings to patients with mild symptoms who are at high risk of developing severe illness. In MMDL, we build a sequential stage-wise learning architecture whose design philosophy is that the model's predicted outcome depends not only on the current situation but also on the history. Concretely, we meticulously combine the latest round of multimodal clinical data and the decayed past information to make assessments and predictions. In each round (stage), we design a two-layer multimodal feature extractor to extract the latent feature representation across different modalities of clinical data, including patient demographics, clinical manifestations, and 11 modalities of laboratory test results. We conduct experiments on a clinical dataset consisting of 216 COVID-19 patients that has passed the ethical review of the medical ethics committee. Experimental results validate our assumption that sequential stage-wise learning outperforms single-stage learning, but history long ago has little influence on the learning outcome. Also, comparison tests show the advantage of multimodal learning: MMDL with multimodal inputs can beat any reduced model with single-modal inputs only. In addition, we have deployed a prototype of MMDL in a hospital for clinical comparison tests and to assist doctors in clinical diagnosis.
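The combination of the latest round of multimodal clinical data with decayed past information can be sketched as a geometrically decayed weighting of per-round feature vectors; the decay rate below is an assumption for illustration, not the model's learned weighting:

```python
import numpy as np

def fuse_rounds(rounds, decay=0.5):
    """Combine consecutive examination rounds: the latest round at full weight,
    earlier rounds down-weighted geometrically (decay ** age), then normalized."""
    rounds = np.asarray(rounds, dtype=float)   # shape (T, d), oldest round first
    ages = np.arange(len(rounds))[::-1]        # latest round has age 0
    weights = decay ** ages
    return (weights[:, None] * rounds).sum(axis=0) / weights.sum()
```

With a decay well below 1, rounds from long ago contribute almost nothing, consistent with the finding that distant history has little influence on the outcome.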


Subjects
COVID-19, Deep Learning, Humans, Patient Acuity, Patients, Disease Outbreaks